    Discussion on the paper: Hypotheses testing by convex optimization by Goldenshluger, Juditsky and Nemirovski

    We briefly discuss some interesting questions related to the paper "Hypotheses testing by convex optimization" by Goldenshluger, Juditsky and Nemirovski.
    Comment: To appear in the EJ

    Simple proof of the risk bound for denoising by exponential weights for asymmetric noise distributions

    In this note, we consider the problem of aggregating estimators in order to denoise a signal. The main contribution is a short proof of the fact that the exponentially weighted aggregate satisfies a sharp oracle inequality. While this result was already known for a wide class of symmetric noise distributions, the extension to asymmetric distributions presented in this note is new.
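
    To make the object of the oracle inequality concrete, here is a minimal sketch of an exponentially weighted aggregate in Python: each candidate denoiser receives a weight proportional to exp(-RSS/beta) and the aggregate is the resulting convex combination. The setup, the function name, and the temperature choice beta = 4*sigma^2 (a value that appears in some sharp-oracle-inequality analyses, used here only as an indication) are our assumptions, not taken from the paper.

    ```python
    import numpy as np

    def exp_weighted_aggregate(y, estimators, beta):
        """Aggregate candidate denoisers by exponential weighting.

        y          : observed noisy signal, shape (n,)
        estimators : candidate estimates stacked row-wise, shape (M, n)
        beta       : temperature parameter (> 0)
        """
        # Residual sum of squares of each candidate against the observation.
        rss = np.sum((estimators - y) ** 2, axis=1)
        # Exponential weights, shifted by min(rss) for numerical stability.
        w = np.exp(-(rss - rss.min()) / beta)
        w /= w.sum()
        # Convex combination of the candidates.
        return w @ estimators

    # Toy usage: aggregate three moving-average denoisers of a noisy sine.
    rng = np.random.default_rng(0)
    n, sigma = 200, 0.3
    t = np.linspace(0, 2 * np.pi, n)
    y = np.sin(t) + sigma * rng.standard_normal(n)
    candidates = np.stack([
        np.convolve(y, np.ones(k) / k, mode="same") for k in (3, 9, 27)
    ])
    fhat = exp_weighted_aggregate(y, candidates, beta=4 * sigma ** 2)
    ```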

    On estimation of the diagonal elements of a sparse precision matrix

    In this paper, we present several estimators of the diagonal elements of the inverse of the covariance matrix, called the precision matrix, of a sample of iid random vectors. The focus is on high-dimensional vectors having a sparse precision matrix. It is now well understood that when the underlying distribution is Gaussian, the columns of the precision matrix can be estimated independently from one another by solving linear regression problems under sparsity constraints. This approach leads to a computationally efficient strategy for estimating the precision matrix that starts by estimating the regression vectors, then estimates the diagonal entries of the precision matrix and, in a final step, combines these estimators to obtain estimators of the off-diagonal entries. While the step of estimating the regression vectors has been intensively studied over the past decade, the problem of deriving statistically accurate estimators of the diagonal entries has received much less attention. The goal of the present paper is to fill this gap by presenting four estimators of the diagonal entries of the precision matrix, which seem the most natural ones, and then performing a comprehensive empirical evaluation of them. The estimators under consideration are the residual variance, the relaxed maximum likelihood, the symmetry-enforced maximum likelihood and the penalized maximum likelihood. We show, both theoretically and empirically, that when the aforementioned regression vectors are estimated without error, the symmetry-enforced maximum likelihood estimator has the smallest estimation error. However, in the more realistic setting where the regression vectors are estimated by a sparsity-favoring, computationally efficient method, the estimators perform comparably, with a slight advantage for the residual variance estimator.
    Comment: Companion R package at http://cran.r-project.org/web/packages/DESP/index.htm
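
    The residual variance estimator mentioned above admits a short sketch: for Gaussian data, regressing the j-th coordinate on the remaining ones gives a regression noise variance equal to 1/Omega_jj, so inverting the empirical residual variance estimates the diagonal entry. The Python sketch below assumes Gaussian data and uses cross-validated Lasso for the node-wise regressions; the companion DESP package may proceed differently, so this is illustrative only.

    ```python
    import numpy as np
    from sklearn.linear_model import LassoCV

    def precision_diagonal_residual_variance(X):
        """Residual-variance estimator of the diagonal of a sparse
        precision matrix via node-wise Lasso regressions.

        X : data matrix of shape (n, p), rows are iid observations.
        """
        n, p = X.shape
        omega_diag = np.empty(p)
        for j in range(p):
            idx = np.delete(np.arange(p), j)
            # Sparse regression of coordinate j on the other coordinates.
            lasso = LassoCV(cv=5).fit(X[:, idx], X[:, j])
            residuals = X[:, j] - lasso.predict(X[:, idx])
            # For Gaussian data Var(residual) = 1 / Omega_jj.
            omega_diag[j] = n / np.sum(residuals ** 2)
        return omega_diag

    # Toy usage: tridiagonal precision matrix with unit diagonal.
    rng = np.random.default_rng(0)
    p, n = 10, 500
    Omega = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
    X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Omega), size=n)
    print(precision_diagonal_residual_variance(X))  # entries near 1.0
    ```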

    Sparse learning approach to the problem of robust estimation of camera locations

    In this paper, we propose a new approach, inspired by recent advances in the theory of sparse learning, to the problem of estimating camera locations when the internal parameters and the orientations of the cameras are known. Our estimator is defined as a Bayesian maximum a posteriori with a multivariate Laplace prior on the vector describing the outliers. This leads to an estimator in which the fidelity to the data is measured by the L∞-norm while the regularization is done by the L1-norm. Building on the papers [11, 15, 16, 14, 21, 22, 24, 18, 23] on L∞-norm minimization in multiview geometry and, on the other hand, on the papers [8, 4, 7, 2, 1, 3] on sparse recovery in the statistical framework, we propose a two-step procedure which, at the first step, identifies and removes the outliers and, at the second step, estimates the unknown parameters by minimizing the L∞ cost function. Both steps are fairly fast: the outlier removal is done by solving one linear program (LP), while the final estimation is performed by a sequence of LPs. An important difference compared to many existing algorithms is that our estimator requires specifying neither the number nor the proportion of the outliers.
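
    The outlier-removal step rests on a generic L∞-fidelity / L1-outlier linear program. The actual multiview cost involves quasi-convex reprojection residuals and a sequence of LPs, which we do not reproduce here; the sketch below shows only the single-LP structure of the first step for a plain linear model y ≈ A x + s with a sparse outlier vector s. All names and the choice of scipy's linprog are our assumptions.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def l1_outlier_identification(A, y, eps):
        """Solve  min ||s||_1  s.t.  ||y - A x - s||_inf <= eps
        as one LP, with auxiliary variables t >= |s| and z = [x, s, t].
        Observations with a large fitted s are flagged as outliers.
        """
        m, d = A.shape
        # Objective: minimize the sum of t, i.e. the l1-norm of s.
        c = np.concatenate([np.zeros(d + m), np.ones(m)])
        I = np.eye(m)
        Z = np.zeros((m, d))
        A_ub = np.vstack([
            np.hstack([A, I, np.zeros((m, m))]),    #  A x + s <= y + eps
            np.hstack([-A, -I, np.zeros((m, m))]),  # -A x - s <= eps - y
            np.hstack([Z, I, -I]),                  #  s - t <= 0
            np.hstack([Z, -I, -I]),                 # -s - t <= 0
        ])
        b_ub = np.concatenate([y + eps, eps - y, np.zeros(2 * m)])
        bounds = [(None, None)] * (d + m) + [(0, None)] * m
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        return res.x[:d], res.x[d:d + m]  # parameter estimate, outlier vector

    # Toy usage: 50 observations, 5 planted gross outliers.
    rng = np.random.default_rng(0)
    m, d = 50, 3
    A = rng.standard_normal((m, d))
    y = A @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(m)
    y[:5] += 5.0                        # plant gross outliers
    x_hat, s_hat = l1_outlier_identification(A, y, eps=0.05)
    inliers = np.abs(s_hat) < 1e-6      # keep observations with s ~ 0
    ```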